The authors provide the first tight sample complexity bounds for shadow tomography and classical shadows in the regime where the target error is below some sufficiently small inverse polynomial in the dimension of the Hilbert space. Specifically, they present a protocol that, given any m ∈ ℕ and ε ≤ O(d^{-1/2}), measures O(log(m)/ε^2) copies of an unknown mixed state ρ ∈ ℂ^{d×d} and outputs a classical description of ρ. This description can then be used to estimate any collection of m observables to within additive accuracy ε. Previously, even for the simpler case of shadow tomography where the observables are known in advance, the best known rates either scaled benignly but suboptimally in all of m, d, ε, or scaled optimally in ε, m but included additional polynomial factors in d. Interestingly, the authors also show via dimensionality reduction that one can rescale ε and d to reduce to the regime where ε ≤ O(d^{-1/2}). Their algorithm draws on representation-theoretic tools developed in the context of full state tomography.
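The classical-shadows primitive this abstract builds on can be sketched in a few lines. The toy below implements the standard single-qubit version (random Pauli-basis measurement, then inverting the measurement channel via snapshot = 3|v⟩⟨v| − I and averaging); it is only an illustration of the general idea, not the paper's representation-theoretic protocol for the ε ≤ O(d^{-1/2}) regime.

```python
import numpy as np

# Minimal single-qubit classical-shadows sketch. The true protocol in the
# abstract is more involved; this only shows the basic estimation loop.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
BASES = [np.linalg.eigh(P)[1] for P in (X, Y, Z)]  # eigenvectors as columns

def shadow_estimate(rho, obs, n_shots, rng):
    """Estimate Tr(obs @ rho) from random Pauli-basis measurements."""
    total = 0.0
    for _ in range(n_shots):
        vecs = BASES[rng.integers(3)]                  # random basis choice
        probs = np.real(np.diag(vecs.conj().T @ rho @ vecs))
        probs = np.clip(probs, 0, None)
        v = vecs[:, rng.choice(2, p=probs / probs.sum())]  # Born-rule outcome
        snapshot = 3 * np.outer(v, v.conj()) - I2      # inverted channel
        total += np.real(np.trace(obs @ snapshot))
    return total / n_shots

rng = np.random.default_rng(0)
rho = 0.5 * (I2 + 0.6 * Z)                             # true Tr(Z rho) = 0.6
est = shadow_estimate(rho, Z, 20_000, rng)
```

With O(1/ε^2) shots the estimate concentrates around the true expectation value, matching the 1/ε^2 scaling in the stated bound.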
We investigate the problem of predicting the output behavior of unknown quantum channels. Given query access to an n-qubit channel E and an observable O, we aim to learn the mapping ρ ↦ Tr(O E[ρ]) to within a small error for most ρ sampled from a distribution D. Previously, Huang, Chen, and Preskill proved a surprising result: even if E is arbitrary, this task can be solved in time roughly n^{O(log(1/ε))}, where ε is the target prediction error. However, their guarantee applied only to input distributions D invariant under all single-qubit Clifford gates, and their algorithm fails for important cases such as general product distributions over product states ρ. In this work, we propose a new approach that achieves accurate prediction over essentially any product distribution D, provided it is not "classical" (in which case there is a trivial exponential lower bound). Our method employs a "biased Pauli analysis," analogous to classical biased Fourier analysis. Implementing this approach requires overcoming several challenges unique to the quantum setting, including the lack of a basis with appropriate orthogonality properties. The techniques we develop to address these issues may have broader applications in quantum information.
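To make the prediction task ρ ↦ Tr(O E[ρ]) concrete, here is a toy single-qubit version with a hypothetical depolarizing channel standing in for the unknown E. Because the map is linear in ρ, plain least squares over Bloch coordinates learns it from examples; the paper's biased Pauli analysis tackles the far harder n-qubit setting and is not reproduced here.

```python
import numpy as np

# Toy instance of learning rho -> Tr(O E[rho]) from examples. The channel
# below is a stand-in; the learner only sees (state, value) pairs.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def rho_from_bloch(r):
    return 0.5 * (I2 + r[0] * X + r[1] * Y + r[2] * Z)

p = 0.3
def channel(rho):                          # E[rho] = (1 - p) rho + p I/2
    return (1 - p) * rho + p * I2 / 2

O = Z                                      # observable to predict
rng = np.random.default_rng(1)
rs = rng.uniform(-1, 1, size=(200, 3))
rs = rs[np.linalg.norm(rs, axis=1) <= 1]   # keep only valid mixed states
feats = np.hstack([np.ones((len(rs), 1)), rs])
ys = np.array([np.real(np.trace(O @ channel(rho_from_bloch(r)))) for r in rs])
w, *_ = np.linalg.lstsq(feats, ys, rcond=None)  # fit the linear map

r_test = np.array([0.2, -0.1, 0.5])
pred = np.array([1.0, *r_test]) @ w
true = np.real(np.trace(O @ channel(rho_from_bloch(r_test))))
```

For a single qubit the fit is exact up to numerical precision; the abstract's point is that achieving analogous guarantees for general product distributions over n qubits requires genuinely new tools.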
The recent proliferation of NISQ devices has made it imperative to understand their power. In this work, we define and study the complexity class NISQ, which encapsulates problems that can be efficiently solved by a classical computer with access to noisy quantum circuits. We establish super-polynomial separations in complexity among classical computation, NISQ, and fault-tolerant quantum computation for problems based on modifications of Simon's problem. We then consider the power of NISQ for three well-studied problems. For unstructured search, we prove that NISQ cannot achieve a Grover-like quadratic speedup over classical computers. For the Bernstein-Vazirani problem, we show that NISQ needs only a number of queries logarithmic in what is required for classical computers. Finally, for a quantum state learning problem, we prove that NISQ is exponentially weaker than classical computers with access to noiseless constant-depth quantum circuits.
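The classical baseline in the Bernstein-Vazirani comparison is easy to make explicit: recovering a secret string s from the oracle f(x) = ⟨s, x⟩ mod 2 takes n classical queries, one per standard basis vector. The sketch below shows only this classical side; the abstract's result is that noisy quantum devices still cut this to logarithmically many queries.

```python
# Classical Bernstein-Vazirani baseline: n oracle queries recover s.
def make_oracle(s):
    calls = {"n": 0}                       # count oracle queries
    def f(x):
        calls["n"] += 1
        return sum(si * xi for si, xi in zip(s, x)) % 2
    return f, calls

def classical_bv(f, n):
    # Query each standard basis vector e_i to read off bit s_i = f(e_i).
    return [f([1 if j == i else 0 for j in range(n)]) for i in range(n)]

s = [1, 0, 1, 1, 0, 1]
f, calls = make_oracle(s)
recovered = classical_bv(f, len(s))
```

Each query reveals exactly one bit of s, and an information-theoretic argument shows n queries are also necessary classically.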
We develop a general framework for finding approximately-optimal preconditioners for solving linear systems. Leveraging this framework we obtain improved runtimes for fundamental preconditioning and linear system solving problems including the following. \begin{itemize} \item \textbf{Diagonal preconditioning.} We give an algorithm which, given positive definite $$\mathbf{K} \in \mathbb{R}^{d \times d}$$ with $$\mathrm{nnz}(\mathbf{K})$$ nonzero entries, computes an $$\epsilon$$-optimal diagonal preconditioner in time $$\widetilde{O}(\mathrm{nnz}(\mathbf{K}) \cdot \mathrm{poly}(\kappa^\star,\epsilon^{-1}))$$, where $$\kappa^\star$$ is the optimal condition number of the rescaled matrix. \item \textbf{Structured linear systems.} We give an algorithm which, given $$\mathbf{M} \in \mathbb{R}^{d \times d}$$ that is either the pseudoinverse of a graph Laplacian matrix or a constant spectral approximation of one, solves linear systems in $$\mathbf{M}$$ in $$\widetilde{O}(d^2)$$ time. \end{itemize} Our diagonal preconditioning results improve state-of-the-art runtimes of $$\Omega(d^{3.5})$$ attained by general-purpose semidefinite programming, and our solvers improve state-of-the-art runtimes of $$\Omega(d^{\omega})$$ where $$\omega > 2.3$$ is the current matrix multiplication constant. We attain our results via new algorithms for a class of semidefinite programs (SDPs) we call \emph{matrix-dictionary approximation SDPs}, which we leverage to solve an associated problem we call \emph{matrix-dictionary recovery}.
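The goal of diagonal preconditioning can be illustrated with the classical Jacobi heuristic: rescale a positive definite K to DKD with D = diag(K)^{-1/2} so the result has unit diagonal. This simple scaling is only a heuristic baseline, not the ε-optimal preconditioner computed by the paper's matrix-dictionary SDP machinery, but it shows why rescaling can dramatically shrink the condition number.

```python
import numpy as np

# Jacobi diagonal scaling: a baseline sketch, not the paper's algorithm.
# K has wildly different diagonal scales, so plain CG-style solvers would
# converge slowly; rescaling to unit diagonal fixes that here.
K = np.array([[1.0,   0.1,   0.0],
              [0.1, 100.0,   1.0],
              [0.0,   1.0, 1.0e4]])
D = np.diag(1.0 / np.sqrt(np.diag(K)))
K_scaled = D @ K @ D                      # unit diagonal after rescaling

before = np.linalg.cond(K)                # ~1e4 for this example
after = np.linalg.cond(K_scaled)          # close to 1
```

The paper's contribution is finding a provably near-optimal D (condition number within (1+ε) of the best achievable, κ*) in time near-linear in nnz(K), rather than relying on such heuristics.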
We consider the question of Gaussian mean testing, a fundamental task in high-dimensional distribution testing and signal processing, subject to adversarial corruptions of the samples. We focus on the relative power of different adversaries, and show that, in contrast to the common wisdom in robust statistics, there exists a strict separation between adaptive adversaries (strong contamination) and oblivious ones (weak contamination) for this task. Specifically, we resolve both the information-theoretic and computational landscapes for robust mean testing. In the exponential-time setting, we establish the tight sample complexity of testing N(0, I) against N(αv, I), where ‖v‖_2 = 1, with an ε-fraction of adversarial corruptions, to be Θ̃(max(√d/α^2, dε^3/α^4, min(d^{2/3}ε^{2/3}/α^{8/3}, dε/α^2))), while the complexity against adaptive adversaries is Θ̃(max(√d/α^2, dε^2/α^4)), which is strictly worse for a large range of vanishing ε, α. To the best of our knowledge, ours is the first separation in sample complexity between the strong and weak contamination models. In the polynomial-time setting, we close a gap in the literature by providing a polynomial-time algorithm against adaptive adversaries achieving the above sample complexity Θ̃(max(√d/α^2, dε^2/α^4)), and a low-degree lower bound (which complements an existing reduction from planted clique) suggesting that all efficient algorithms require this many samples, even in the oblivious-adversary setting.
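The uncorrupted version of the testing problem has a standard statistic worth keeping in mind: with n samples from N(μ, I_d), T = ‖Σ_i x_i‖^2/n − d concentrates near 0 under the null μ = 0 and near nα^2 under the alternative μ = αv. The sketch below handles no corruptions at all; the paper's results concern exactly how adversarial contamination changes the sample complexity of this task.

```python
import numpy as np

# Non-robust Gaussian mean testing sketch (no adversary). Under the null
# T ~ chi^2_d - d (mean 0, std sqrt(2d)); under the alternative the
# noncentrality shifts T up by about n * alpha^2.
def mean_test_stat(xs):
    n, d = xs.shape
    return np.linalg.norm(xs.sum(axis=0)) ** 2 / n - d

d, n, alpha = 100, 2000, 0.5
rng = np.random.default_rng(7)
v = np.zeros(d); v[0] = 1.0               # unit direction for the mean shift

t_null = mean_test_stat(rng.normal(size=(n, d)))
t_alt = mean_test_stat(alpha * v + rng.normal(size=(n, d)))
threshold = 6 * np.sqrt(2 * d)            # ~6 null standard deviations
```

Here nα^2 = 500 dwarfs the null fluctuations of order √(2d) ≈ 14, so thresholding separates the two hypotheses; the abstract's bounds quantify how many samples this kind of separation requires once an ε-fraction of samples is adversarially corrupted.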